[DOCS-12667] Add log collection setup information #32974
base: master
Conversation
tedkahwaji left a comment
Can we include the necessary GCP and Datadog permissions needed to run the QuickStart?
> **Note**: Only folders and projects that you have the necessary access and permissions for appear in this section. Likewise, folders and projects without a display name do not appear.
> 1. In the **Dataflow Job Configuration** section, specify configuration options for the Dataflow job:
>    - Select deployment settings (Google Cloud region and project to host the created resources---Pub/Sub topics and subscriptions, a log routing sink, a Secret Manager entry, a service account, a Cloud Storage bucket, and a Dataflow job)
>    **Note**: You cannot name the created resources---the script uses predefined names, so it can skip creation if it finds preexisting resources with the same name.
Let's remove this entirely. Sorry, I know the documentation request doc included this part, but I prefer we keep it out of the public documentation.
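For context on the second **Note** in that excerpt: "skip creation if it finds preexisting resources with the same name" is the standard create-or-reuse check. A minimal sketch of that behavior for one of the listed resources (a Pub/Sub topic), assuming the google-cloud-pubsub Python client; the project and topic names here are illustrative, not the script's actual predefined names:

```python
# Create-or-reuse check for one of the resources the script provisions
# (a Pub/Sub topic). Names are illustrative, not the script's actual ones.
from google.api_core.exceptions import AlreadyExists
from google.cloud import pubsub_v1

def ensure_topic(project_id: str, topic_id: str) -> str:
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    try:
        publisher.create_topic(name=topic_path)
        print(f"Created {topic_path}")
    except AlreadyExists:
        # A preexisting resource already has the predefined name: skip creation.
        print(f"Found existing {topic_path}; skipping creation")
    return topic_path

ensure_topic("my-gcp-project", "datadog-export-topic")
```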
> - Click **Open Google Cloud Shell** to run the script in the [Google Cloud Shell][102].
> 1. After running the script, return to the Google Cloud integration tile.
> 1. In the **Select Projects** section, select the folders and projects to forward logs from. If you select a folder, logs are forwarded from all of its child projects.
> **Note**: Only folders and projects that you have the necessary access and permissions for appear in this section. Likewise, folders and projects without a display name do not appear.
Can we have each **Note** on its own line?
> **Note**: You cannot name the created resources---the script uses predefined names, so it can skip creation if it finds preexisting resources with the same name.
> - Select scaling settings (number of workers and maximum workers)
> - Select performance settings (maximum number of parallel requests and batch size)
> - Select execution options (Streaming Engine is enabled by default; read more about its [benefits][103])
I would remove "Streaming Engine is enabled by default; read more about its [benefits][103]".
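For reference, the scaling and performance settings in that list correspond to launch options on Google's Pub/Sub to Datadog Dataflow template (`batchCount` for batch size, `parallelism` for maximum parallel requests). A rough sketch of a launch request carrying them, assuming the google-api-python-client library; every project, bucket, and subscription name below is a placeholder, not a value the setup script actually uses:

```python
# Launch Google's Pub/Sub to Datadog template with the scaling and
# performance settings from the list above. All names are placeholders.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
response = dataflow.projects().locations().templates().launch(
    projectId="my-gcp-project",
    location="us-central1",
    gcsPath="gs://dataflow-templates-us-central1/latest/Cloud_PubSub_to_Datadog",
    body={
        "jobName": "datadog-log-export",
        "parameters": {
            "inputSubscription": "projects/my-gcp-project/subscriptions/datadog-export-sub",
            "url": "https://http-intake.logs.datadoghq.com",
            "apiKeySource": "SECRET_MANAGER",
            "apiKeySecretId": "projects/my-gcp-project/secrets/datadog-api-key/versions/latest",
            "batchCount": "100",  # performance: batch size
            "parallelism": "8",   # performance: max parallel requests
        },
        "environment": {
            "numWorkers": 1,                # scaling: initial worker count
            "maxWorkers": 3,                # scaling: maximum workers
            "enableStreamingEngine": True,  # execution option
            "tempLocation": "gs://my-dataflow-bucket/temp",
        },
    },
).execute()
print(response["job"]["id"])
```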
> - Select scaling settings (number of workers and maximum workers)
> - Select performance settings (maximum number of parallel requests and batch size)
> - Select execution options (Streaming Engine is enabled by default; read more about its [benefits][103])
> **Note**: If you select to enable [Dataflow Prime][104], you cannot configure worker machine type in the **Advanced Configuration** section.
Let's remove this part; it's pretty evident in the UI that you can't pick a machine type if Dataflow Prime is enabled, since a message box appears and states that.
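One supporting detail, in case it helps the wording elsewhere: Dataflow Prime is requested through an experiment flag rather than a worker machine type, which is why the two settings are mutually exclusive. A hedged sketch of just the `environment` block from the launch request above, assuming the `enable_prime` experiment name and the same placeholder bucket:

```python
# With Dataflow Prime, worker resources are managed by the service, so
# machineType is omitted; Prime itself is requested via an experiment flag.
environment = {
    "additionalExperiments": ["enable_prime"],
    # "machineType": "n1-standard-4",  # not configurable when Prime is on
    "tempLocation": "gs://my-dataflow-bucket/temp",
}
```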
> 1. In the **Dataflow Job Configuration** section, specify configuration options for the Dataflow job:
>    - Select deployment settings (Google Cloud region and project to host the created resources---Pub/Sub topics and subscriptions, a log routing sink, a Secret Manager entry, a service account, a Cloud Storage bucket, and a Dataflow job)
>    **Note**: You cannot name the created resources---the script uses predefined names, so it can skip creation if it finds preexisting resources with the same name.
>    - Select scaling settings (number of workers and maximum workers)
Let's remove "number of workers" here, since that's not supported in the Terraform flow.
What does this PR do? What is the motivation?
Merge instructions
Merge readiness:
For Datadog employees:
Your branch name MUST follow the <name>/<description> convention and include the forward slash (/). Without this format, your pull request will not pass CI, the GitLab pipeline will not run, and you won't get a branch preview. Getting a branch preview makes it easier for us to check any issues with your PR, such as broken links. If your branch doesn't follow this format, rename it or create a new branch and PR.
[6/5/2025] Merge queue has been disabled on the documentation repo. If you have write access to the repo, the PR has been reviewed by a Documentation team member, and all of the required checks have passed, you can use the Squash and Merge button to merge the PR. If you don't have write access, or you need help, reach out in the #documentation channel in Slack.
Additional notes